-
People attempting to immigrate to the U.S. (through a port of entry or other means) may be required to accept various forms of surveillance technologies after interacting with immigration officials. In March 2025, around 160,000 people in the U.S. were required to use a smartphone application, BI SmartLINK, that uses facial recognition, voice recognition, and location tracking; others were assigned an ankle monitor or a smartwatch. These compulsory surveillance technologies exist under Immigration and Customs Enforcement (ICE)'s Alternatives to Detention (ATD) program, a combination of surveillance technologies, home visits, and in-person meetings with ICE officials and third-party “case specialists.” For migrants in the U.S. who are already facing multiple other challenges, such as securing housing, work, or healthcare, the surveillance technologies administered under ATD introduce new challenges. To understand the challenges facing migrants using BI SmartLINK under ATD, their questions about the app, and what role technologists might play (if any) in addressing these challenges, we conducted an interview study (n=9) with immigrant rights advocates. These advocates have collectively supported thousands of migrants over their careers and have witnessed firsthand their struggles with surveillance tech under ATD. Among other things, our findings highlight how surveillance tech exacerbates the power imbalance between migrants and ICE officials (or their proxies), how these technologies negatively impact migrants, and how migrants and their advocates struggle to understand how the technologies that surveil them function. The harms experienced by migrants lead us to believe that BI SmartLINK should not be used and that these harms fundamentally cannot be addressed by improvements to the app's functionality or design. However, because this technology is currently deployed, we end by highlighting intervention opportunities for technologists to use our findings to make these high-stakes technologies less opaque for migrants and their advocates.
-
We applied techniques from psychology, typically used to visualize human bias, to facial analysis systems, providing novel approaches for diagnosing and communicating algorithmic bias. First, we aggregated a diverse corpus of human facial images (N=1492) with self-identified gender and race. We tested four automated gender recognition (AGR) systems and found that some exhibited intersectional gender-by-race biases. Employing face averaging, a technique developed by psychologists, we created composite images to visualize these systems' outputs. For example, we visualized what an average woman looks like according to a system's output. Second, we conducted two online experiments in which participants judged the bias of hypothetical AGR systems. The first experiment involved participants (N=228) from a convenience sample; when the same results were depicted in different formats, facial visualizations communicated bias to the same magnitude as statistics. In the second experiment, with only Black participants (N=223), facial visualizations communicated bias significantly more than statistics, suggesting that face averages are meaningful for communicating algorithmic bias.
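The face-averaging step lends itself to a short illustration. The Python sketch below groups face images by whatever label a classifier assigns and averages the pixel values within each group, yielding one composite face per predicted label. The predict_label callable, the image paths, and the resize-only alignment are hypothetical placeholders for illustration, not the authors' pipeline.

```python
# A minimal sketch of face averaging, assuming roughly aligned, same-size face photos.
# "predict_label" stands in for an AGR system's output and is hypothetical.
import numpy as np
from PIL import Image

def load_face(path, size=(256, 256)):
    """Load a face image as an RGB array; real pipelines also align facial landmarks."""
    return np.asarray(Image.open(path).convert("RGB").resize(size), dtype=np.float64)

def face_averages(image_paths, predict_label):
    """Return one composite (mean) face per label the classifier assigns."""
    groups = {}
    for path in image_paths:
        label = predict_label(path)  # e.g., the AGR system's gender output for this image
        groups.setdefault(label, []).append(load_face(path))
    # Averaging the stack of images gives a composite face for each predicted group.
    return {label: np.mean(np.stack(faces), axis=0).astype(np.uint8)
            for label, faces in groups.items()}

# Hypothetical usage: save the composite for each predicted label.
# for label, composite in face_averages(paths, agr_system).items():
#     Image.fromarray(composite).save(f"average_{label}.png")
```

Grouping by the system's predicted label (rather than by self-identified label) is what makes the composite a visualization of the system's behavior, which is the sense in which the abstract uses face averaging.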
-
Deceptive design patterns (sometimes called “dark patterns”) are user interface design elements that may trick, deceive, or mislead users into behaviors that often benefit the party implementing the design over the end user. Prior work has taxonomized, investigated, and measured the prevalence of such patterns primarily in visual user interfaces (e.g., on websites). However, as the ubiquity of voice assistants and other voice-assisted technologies increases, we must anticipate how deceptive designs will be (and indeed already are) deployed in voice interactions. This paper makes two contributions towards characterizing and surfacing deceptive design patterns in voice interfaces. First, we make a conceptual contribution, identifying key characteristics of voice interfaces that may enable deceptive design patterns and surfacing existing and theoretical examples of such patterns. Second, we present findings from a scenario-based user survey with 93 participants, in which we investigate participants' perceptions of voice interfaces that we consider deceptive and of ones we consider non-deceptive.